Dexterous robotic manipulation using deep reinforcement learning and knowledge transfer for complex sparse reward-based tasks
Authors
Abstract
This paper describes a deep reinforcement learning (DRL) approach that won Phase 1 of the Real Robot Challenge (RRC) 2021, and then extends this method to a more difficult manipulation task. The RRC consisted of using a TriFinger robot to manipulate a cube along a specified positional trajectory, but with no requirement for the cube to have any specific orientation. We used a relatively simple reward function, a combination of goal-based sparse and distance rewards, in conjunction with Hindsight Experience Replay (HER), to guide the learning of the DRL agent (Deep Deterministic Policy Gradient (DDPG)). Our approach allowed our agents to acquire dexterous robotic manipulation strategies in simulation. These strategies were then applied to the real robot and outperformed all other competition submissions, including those using traditional control techniques, in the final evaluation stage of the RRC. Here we extend the method by modifying the task to require the agent to maintain the cube in a particular orientation while it is moved along the required trajectory. The requirement to also orient the cube makes the agent unable to learn the task through blind exploration, due to the increased problem complexity. To circumvent this issue, we make novel use of a Knowledge Transfer (KT) technique that allows the strategies learned in the original task (which was agnostic to cube orientation) to be transferred to the extended task (where orientation matters). KT allowed the agent to learn and perform the extended task in the simulator, improving the average positional deviation from 0.134 m to 0.02 m, and the average orientation deviation from 142° to 76°, during evaluation. The KT concept shows good generalisation properties and could be applied to any actor-critic algorithm.
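The combination of a goal-based sparse reward and HER described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the 0.02 m success tolerance, the "future" goal-relabelling strategy, and the parameter k = 4 substitute goals per transition are assumptions taken from common HER practice.

```python
import numpy as np

def sparse_goal_reward(achieved_pos, goal_pos, tolerance=0.02):
    """Goal-based sparse reward: 0 if the achieved position is within
    `tolerance` of the goal, -1 otherwise."""
    dist = np.linalg.norm(achieved_pos - goal_pos)
    return 0.0 if dist < tolerance else -1.0

def her_relabel(episode, k=4, rng=np.random.default_rng(0)):
    """HER ('future' strategy): for each transition, additionally store
    copies whose goal is replaced by a state achieved later in the same
    episode, so even failed sparse-reward trajectories contain successes
    the agent can learn from."""
    relabelled = []
    T = len(episode)
    for t, (obs, action, achieved, goal) in enumerate(episode):
        # original transition with the true goal
        relabelled.append((obs, action, goal,
                           sparse_goal_reward(achieved, goal)))
        # sample up to k future achieved states as substitute goals
        future_idx = rng.integers(t, T, size=min(k, T - t))
        for idx in future_idx:
            new_goal = episode[idx][2]
            relabelled.append((obs, action, new_goal,
                               sparse_goal_reward(achieved, new_goal)))
    return relabelled
```

The relabelled transitions would then be pushed into the replay buffer of an off-policy actor-critic learner such as DDPG, which is what makes HER applicable here.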
Similar resources
Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations
Dexterous multi-fingered hands are extremely versatile and provide a generic way to perform multiple tasks in human-centric environments. However, effectively controlling them remains challenging due to their high dimensionality and large number of potential contacts. Deep reinforcement learning (DRL) provides a model-agnostic approach to control complex dynamical systems, but has not been show...
Composable Deep Reinforcement Learning for Robotic Manipulation
Model-free deep reinforcement learning has been shown to exhibit good performance in domains ranging from video games to simulated robotic manipulation and locomotion. However, model-free methods are known to perform poorly when the interaction time with the environment is limited, as is the case for most real-world robotic tasks. In this paper, we study how maximum entropy policies trained usi...
Deep Reinforcement Learning for Robotic Manipulation
Reinforcement learning holds the promise of enabling autonomous robots to learn large repertoires of behavioral skills with minimal human intervention. However, robotic applications of reinforcement learning often compromise the autonomy of the learning process in favor of achieving training times that are practical for real physical systems. This typically involves introducing hand-engineered ...
Deep Reinforcement Learning for Dexterous Manipulation with Concept Networks
Deep reinforcement learning yields great results for a large array of problems, but models are generally retrained anew for each new problem to be solved. Prior learning and knowledge are difficult to incorporate when training new models, requiring increasingly longer training as problems become more complex. This is especially problematic for problems with sparse rewards. We provide a solution...
Data-efficient Deep Reinforcement Learning for Dexterous Manipulation
Deep learning and reinforcement learning methods have recently been used to solve a variety of problems in continuous control domains. An obvious application of these techniques is dexterous manipulation tasks in robotics which are difficult to solve using traditional control theory or hand-engineered approaches. One example of such a task is to grasp an object and precisely stack it on another...
Journal
Journal: Expert Systems
سال: 2022
ISSN: 0266-4720, 1468-0394
DOI: https://doi.org/10.1111/exsy.13205